ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable Robots
The precise control of soft and continuum robots requires knowledge of their
shape. In contrast to classical rigid robots, the shape of these robots has
infinitely many degrees of freedom. To partially reconstruct the shape,
proprioceptive techniques use built-in sensors, which yield inaccurate results
and increase fabrication complexity. Exteroceptive methods so far rely on
placing reflective markers on all tracked components and triangulating their
positions with multiple motion-tracking cameras. Such tracking systems are
expensive and infeasible for deformable robots interacting with the
environment due to marker occlusion and damage. Here, we present a regression
approach for 3D shape estimation using a convolutional neural network. The
proposed approach takes advantage of data-driven supervised learning and is
capable of real-time marker-less shape estimation during inference. Two images
of the robotic system are taken simultaneously at 25 Hz from two different
perspectives and fed to the network, which returns the parameterized shape for
each pair. The proposed approach outperforms marker-less state-of-the-art
methods by up to 4.4% in estimation accuracy while being more robust and
requiring no prior knowledge of the shape. The approach is easy to implement,
since it requires only two color cameras without depth sensing and no explicit
calibration of the extrinsic parameters. Evaluations on two types of soft
robotic arms and on a soft robotic fish demonstrate our method's accuracy and
versatility on highly deformable systems in real time. The robust performance
of the approach under scene modifications (camera alignment and brightness)
suggests its generalizability to a wider range of experimental setups, which
will benefit downstream tasks such as robotic grasping and manipulation.
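The two-view regression pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's actual network: each camera view is passed through a single hand-written convolution, the resulting features from both views are concatenated, and a linear head regresses a small vector of shape parameters. All dimensions, weights, and function names here are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    # Naive "valid" 2D convolution: slide the kernel over the image
    # and take the elementwise-product sum at each position.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def estimate_shape(img_a, img_b, kernel, weights, bias):
    # Extract crude features from each of the two camera views,
    # concatenate them, apply a ReLU, and regress the parameterized
    # shape with a linear head (stand-in for the CNN in the paper).
    feats = np.concatenate([
        conv2d(img_a, kernel).ravel(),
        conv2d(img_b, kernel).ravel(),
    ])
    return np.maximum(feats, 0.0) @ weights + bias
```

With 8x8 input views and a 3x3 kernel, each view yields 36 features, so the linear head maps 72 inputs to however many shape parameters one chooses to regress.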
Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints
Biomimetic, dexterous robotic hands have the potential to replicate many of
the tasks that a human hand can do and to serve as a general manipulation
platform. Recent advances in reinforcement learning (RL) frameworks have
achieved remarkable performance in quadrupedal locomotion and dexterous
manipulation tasks. Combined with GPU-based, highly parallelized simulations
capable of simulating thousands of robots in parallel, RL-based controllers
have become more scalable and approachable. However, to bring RL-trained
policies to the real world, we require training frameworks that output
policies compatible with physical actuators and sensors, as well as a hardware
platform that can be manufactured from accessible materials yet is robust
enough to run interactive policies. This work introduces the biomimetic
tendon-driven Faive Hand and its system architecture, which uses tendon-driven
rolling contact joints to achieve a 3D-printable, robust, high-DoF hand
design. We model each element of the hand and integrate it into a GPU
simulation environment to train a policy with RL, achieving zero-shot transfer
of a dexterous in-hand sphere rotation skill to the physical robot hand.
Comment: For the project website, see https://srl-ethz.github.io/get-ball-rolling/
. For the video, see https://youtu.be/YahsMhqNU8o . Submitted to the 2023
IEEE-RAS International Conference on Humanoid Robots.
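The core idea behind simulating thousands of robots in parallel is to vectorize the dynamics: instead of stepping each environment in a loop, the state of all environments is held in arrays and updated at once. The toy sketch below steps a batch of simple pendulums with one numpy call per physics term; it is an illustration of the batching pattern, not the paper's hand model or simulator.

```python
import numpy as np

def step_batched(theta, omega, torque, dt=0.01, g=9.81, length=1.0):
    # Semi-implicit Euler step for N pendulum "robots" simultaneously.
    # theta, omega, torque are arrays of shape (N,); every operation is
    # vectorized, which is the same pattern GPU simulators use to step
    # thousands of environments in parallel for RL training.
    omega = omega + dt * (torque - (g / length) * np.sin(theta))
    theta = theta + dt * omega
    return theta, omega
```

On a GPU framework the arrays would live in device memory, but the control flow is identical: one batched step advances every environment.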
Model-Based Disturbance Estimation for a Fiber-Reinforced Soft Manipulator using Orientation Sensing
For soft robots to work effectively in human-centered environments, they need to be able to estimate their state and external interactions from (proprioceptive) sensors. Estimating disturbances allows a soft robot to perform desirable force control. Even for rigid manipulators, force estimation at the end-effector is considered a non-trivial problem, and current approaches to this challenge have shortcomings that prevent their general application. They are often based on simplified soft-dynamics models, such as those relying on a piecewise constant curvature (PCC) approximation or on matched rigid-body models that do not capture enough detail of the problem, so the applications needed for complex human-robot interaction cannot be built. Finite element methods (FEM) allow predictions of soft robot dynamics in a more generic fashion. Here, using the soft-robot modeling capabilities of the SOFA framework, we build a detailed FEM model of a multi-segment soft continuum robotic arm composed of compliant deformable materials and fiber-reinforced pressurized actuation chambers, together with a model of sensors that provide orientation output. This model is used to establish a state observer for the manipulator. Model parameters were calibrated with physical experiments to match imperfections of the manual fabrication process. We then solve a quadratic programming inverse dynamics problem to compute the components of external force that explain the pose error. Our experiments show an average force estimation error of around 1.2%. As the proposed methods are generic, these results are encouraging for the task of building soft robots that exhibit complex, reactive, sensor-based behavior and can be deployed in human-centered environments.
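The disturbance-estimation step can be illustrated in its simplest form. The paper solves a quadratic programming inverse dynamics problem; the sketch below solves only the unconstrained special case, where finding the external force whose predicted pose deviation best explains the observed pose error reduces to regularized least squares. The sensitivity matrix A (mapping force components to pose deviation, e.g. linearized from an FEM model) is an assumed input, and the names are illustrative.

```python
import numpy as np

def estimate_disturbance(A, pose_error, reg=1e-6):
    # Unconstrained QP: minimize ||A @ f - pose_error||^2 + reg * ||f||^2
    # over the external force f. With no inequality constraints this has
    # the closed-form solution of the regularized normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ pose_error)
```

With actuation bounds or contact constraints added, the same objective would be handed to a QP solver instead of solved in closed form.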